Empirical Optimal Kernel for Convex Multiple Kernel Learning

Authors

  • Peiyan Wang
  • Dongfeng Cai
  • Guiping Zhang
  • Yu Bai
  • Fang Cai
  • Tianhao Zhang
Abstract

Multiple kernel learning (MKL) aims to learn a combination of different kernels, instead of using a single fixed kernel, in order to better match the underlying problem. In this paper, we propose the Empirical Optimal Kernel for convex-combination MKL. The Empirical Optimal Kernel is based on the theory of kernel polarization and is the kernel with the best generalization ability achievable from the training data in the convex-combination scenario. Based on the Empirical Optimal Kernel, we propose three algorithms for finding the optimal combination weights: a heuristic approach, an optimization approach, and an alternating optimization approach. On the Multiple Features Digit Recognition data set, the proposed methods achieve performance comparable to the compared methods while using fewer support vectors and active kernels. On 5 UCI data sets, the Empirical Optimal Kernel based optimization approach has a higher winning percentage (t-test at significance level 0.05) and fewer active kernels and support vectors than the other MKL algorithms.
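The full text is not reproduced here, but the ideas named in the abstract can be illustrated. Kernel polarization scores a kernel matrix K by its alignment with the labels, y^T K y, and a convex combination constrains the weights to be non-negative and sum to one. The following is a minimal sketch, not the paper's algorithm: the function names, the toy data, and the heuristic of setting weights proportional to each base kernel's polarization are illustrative assumptions.

```python
import numpy as np

def polarization(K, y):
    # Kernel polarization: y^T K y measures how well K aligns with labels y in {-1, +1}
    return float(y @ K @ y)

def heuristic_weights(kernels, y):
    # Illustrative heuristic (not the paper's method): weight each base kernel
    # proportionally to its polarization, clipped at zero, then normalize so the
    # weights lie on the simplex (non-negative, summing to one).
    scores = np.array([max(polarization(K, y), 0.0) for K in kernels])
    if scores.sum() == 0.0:
        return np.ones(len(kernels)) / len(kernels)  # fall back to uniform weights
    return scores / scores.sum()

# Toy data: 20 points in 2D, labels determined by the first coordinate
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.sign(X[:, 0])

# Two base kernels: linear and Gaussian RBF
K_lin = X @ X.T
K_rbf = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))

mu = heuristic_weights([K_lin, K_rbf], y)
K = mu[0] * K_lin + mu[1] * K_rbf  # convex combination of the base kernels
```

Because polarization is linear in the combination weights, an unconstrained maximization over the simplex would put all mass on a single kernel; normalizing or regularizing the base kernels is what makes a genuine mixture worthwhile, which is where the paper's optimization and alternating-optimization approaches come in.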


Related Articles

Multiple Kernel Learning with Gaussianity Measures

Kernel methods are known to be effective for nonlinear multivariate analysis. One of the main issues in the practical use of kernel methods is the selection of the kernel. There have been many studies on kernel selection and kernel learning. Multiple kernel learning (MKL) is one of the promising kernel optimization approaches. Kernel methods are applied to various classifiers including Fisher’s...


Multiple Indefinite Kernel Learning for Feature Selection

Multiple kernel learning for feature selection (MKL-FS) utilizes kernels to explore complex properties of features and performs better in embedded methods. However, the kernels in MKL-FS are generally limited to be positive definite. In fact, indefinite kernels often emerge in actual applications and can achieve better empirical performance. But due to the non-convexity of indefinite kernels, ex...


Total stability of kernel methods

Regularized empirical risk minimization using kernels and their corresponding reproducing kernel Hilbert spaces (RKHSs) plays an important role in machine learning. However, the actually used kernel often depends on one or on a few hyperparameters or the kernel is even data dependent in a much more complicated manner. Examples are Gaussian RBF kernels, kernel learning, and hierarchical Gaussian...


Two-Layer Multiple Kernel Learning

Multiple Kernel Learning (MKL) aims to learn kernel machines for solving a real machine learning problem (e.g. classification) by exploring the combinations of multiple kernels. The traditional MKL approach is in general “shallow” in the sense that the target kernel is simply a linear (or convex) combination of some base kernels. In this paper, we investigate a framework of Multi-Layer Multiple...


Multi-Task Learning Using Neighborhood Kernels

This paper introduces a new and effective algorithm for learning kernels in a Multi-Task Learning (MTL) setting. Although we consider an MTL scenario here, our approach can be easily applied to standard single-task learning as well. As shown by our empirical results, our algorithm consistently outperforms traditional kernel learning algorithms such as the uniform combination solution, convex c...



Journal:

Volume   Issue 

Pages  -

Publication date: 2014